
Collaborating Authors

Apple Inc.


TradingAgents: Multi-Agents LLM Financial Trading Framework

Xiao, Yijia, Sun, Edward, Luo, Di, Wang, Wei

arXiv.org Artificial Intelligence

Significant progress has been made in automated problem-solving using societies of agents powered by large language models (LLMs). In finance, efforts have largely focused on single-agent systems handling specific tasks or multi-agent frameworks independently gathering data. However, multi-agent systems' potential to replicate real-world trading firms' collaborative dynamics remains underexplored. TradingAgents proposes a novel stock trading framework inspired by trading firms, featuring LLM-powered agents in specialized roles such as fundamental analysts, sentiment analysts, technical analysts, and traders with varied risk profiles. The framework includes Bull and Bear researcher agents assessing market conditions, a risk management team monitoring exposure, and traders synthesizing insights from debates and historical data to make informed decisions. By simulating a dynamic, collaborative trading environment, this framework aims to improve trading performance. Detailed architecture and extensive experiments reveal its superiority over baseline models, with notable improvements in cumulative returns, Sharpe ratio, and maximum drawdown, highlighting the potential of multi-agent LLM frameworks in financial trading. More details on TradingAgents are available at https://TradingAgents-AI.github.io.
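A minimal sketch of the role-based pipeline this abstract describes, with toy heuristics standing in for the LLM-powered agents. All role names, scores, and the risk threshold here are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative role-based multi-agent trading pipeline in the spirit of
# TradingAgents. Each "analyst" is a toy heuristic standing in for an LLM agent.

from dataclasses import dataclass

@dataclass
class Report:
    role: str        # which analyst produced this view
    stance: float    # -1.0 (bearish) .. +1.0 (bullish)
    rationale: str

def fundamental_analyst(pe_ratio: float) -> Report:
    # Toy heuristic standing in for an LLM's fundamental analysis.
    stance = 0.5 if pe_ratio < 20 else -0.3
    return Report("fundamental", stance, f"P/E ratio is {pe_ratio}")

def sentiment_analyst(news_polarity: float) -> Report:
    return Report("sentiment", news_polarity, "aggregated news polarity")

def technical_analyst(price: float, moving_avg: float) -> Report:
    stance = 0.4 if price > moving_avg else -0.4
    return Report("technical", stance, "price vs. 50-day moving average")

def trader_decision(reports, risk_limit: float = 0.2) -> str:
    # The trader synthesizes the debate; the risk limit widens the
    # hold band so weak-conviction trades are vetoed.
    consensus = sum(r.stance for r in reports) / len(reports)
    if consensus > risk_limit:
        return "BUY"
    if consensus < -risk_limit:
        return "SELL"
    return "HOLD"

reports = [
    fundamental_analyst(pe_ratio=15.0),
    sentiment_analyst(news_polarity=0.2),
    technical_analyst(price=182.0, moving_avg=175.0),
]
print(trader_decision(reports))  # consensus ≈ 0.37 → BUY
```

The Bull/Bear researcher debate and historical-data synthesis the abstract mentions would slot in as further report producers feeding the same trader step.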


AlphaFin: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework

Li, Xiang, Li, Zhenyu, Shi, Chen, Xu, Yong, Du, Qing, Tan, Mingkui, Huang, Jun, Lin, Wei

arXiv.org Artificial Intelligence

The task of financial analysis primarily encompasses two key areas: stock trend prediction and the corresponding financial question answering. Currently, machine learning and deep learning (ML&DL) algorithms are widely applied to stock trend prediction and have led to significant progress. However, these methods fail to provide reasons for their predictions, lacking interpretability and reasoning processes, and they cannot integrate textual information such as financial news or reports. Meanwhile, large language models (LLMs) have remarkable textual understanding and generation ability, but due to the scarcity of financial training datasets and limited integration with real-time knowledge, they still suffer from hallucinations and are unable to keep up with the latest information. To tackle these challenges, we first release the AlphaFin datasets, which combine traditional research datasets, real-time financial data, and handwritten chain-of-thought (CoT) data, providing a strong basis for training LLMs to perform financial analysis. We then use the AlphaFin datasets to benchmark a state-of-the-art method, called Stock-Chain, which integrates retrieval-augmented generation (RAG) techniques to tackle the financial analysis task effectively. Extensive experiments demonstrate the effectiveness of our framework on financial analysis.
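The retrieval-augmented step a framework like Stock-Chain relies on can be sketched as below. The lexical scorer and the document store are toy placeholders; a real system would use dense embeddings and live financial data:

```python
# Minimal RAG sketch: fetch the most relevant documents and prepend them
# to the model prompt so answers are grounded in retrieved evidence.

def score(query: str, doc: str) -> int:
    # Naive lexical overlap; real systems use dense embeddings.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return (
        "Answer using only the context below, and cite it.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

corpus = [
    "Q3 earnings report: revenue grew 8% year over year.",
    "Analyst note: semiconductor demand is softening.",
    "Weather update: sunny in Cupertino.",
]
prompt = build_prompt("How did revenue change in the Q3 earnings report?", corpus)
print(prompt)
```

Grounding the generation step in retrieved, up-to-date documents is what addresses the hallucination and staleness problems the abstract raises.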


Can Large Language Models Beat Wall Street? Unveiling the Potential of AI in Stock Selection

Fatouros, Georgios, Metaxas, Konstantinos, Soldatos, John, Kyriazis, Dimosthenis

arXiv.org Artificial Intelligence

In the dynamic and data-driven landscape of financial markets, this paper introduces MarketSenseAI, a novel AI-driven framework leveraging the advanced reasoning capabilities of GPT-4 for scalable stock selection. MarketSenseAI incorporates Chain of Thought and In-Context Learning methodologies to analyze a wide array of data sources, including market price dynamics, financial news, company fundamentals, and macroeconomic reports, emulating the decision-making process of prominent financial investment teams. The development, implementation, and empirical validation of MarketSenseAI are detailed, with a focus on its ability to provide actionable investment signals (buy, hold, sell) backed by cogent explanations. A notable aspect of this study is the use of GPT-4 not only as a predictive tool but also as an evaluator, revealing the significant impact of the AI-generated explanations on the reliability and acceptance of the suggested investment signals. In an extensive empirical evaluation with S&P 100 stocks, MarketSenseAI outperformed the benchmark index by 13%, achieving returns up to 40%, while maintaining a risk profile comparable to the market. These results demonstrate the efficacy of Large Language Models in complex financial decision-making and mark a significant advancement in the integration of AI into financial analysis and investment strategies. This research contributes to the financial AI field, presenting an innovative approach and underscoring the transformative potential of AI in revolutionizing traditional financial analysis and investment methodologies.
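The predictor/evaluator pattern the abstract describes can be sketched as a two-pass pipeline. `fake_llm`, the prompt templates, and the score threshold below are all illustrative stand-ins for a real GPT-4 client, not the paper's implementation:

```python
# Two-pass pattern: one model call produces a signal with an explanation,
# a second call grades that explanation, and only well-supported signals
# are kept.

def predictor(llm, facts: str) -> tuple[str, str]:
    out = llm(f"Given these facts, answer SIGNAL|EXPLANATION:\n{facts}")
    signal, explanation = out.split("|", 1)
    return signal.strip(), explanation.strip()

def evaluator(llm, facts: str, explanation: str) -> int:
    # Ask the model to grade the rationale from 1 to 10.
    return int(llm(f"Score 1-10 how well this explanation follows "
                   f"from the facts.\nFacts: {facts}\nExplanation: {explanation}"))

def vetted_signal(llm, facts: str, threshold: int = 7) -> str:
    signal, explanation = predictor(llm, facts)
    return signal if evaluator(llm, facts, explanation) >= threshold else "hold"

# Deterministic fake LLM so the pipeline runs without an API key.
def fake_llm(prompt: str) -> str:
    return "8" if prompt.startswith("Score") else "buy|Fundamentals improved."

print(vetted_signal(fake_llm, "Revenue up 8%; margins stable."))  # buy
```

The evaluator pass is what lets explanation quality gate the signal, which is the mechanism the abstract credits for more reliable recommendations.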


When Graph Data Meets Multimodal: A New Paradigm for Graph Understanding and Reasoning

Ai, Qihang, Zhou, Jianwu, Jiang, Haiyun, Liu, Lemao, Shi, Shuming

arXiv.org Artificial Intelligence

Graph data is ubiquitous in the physical world, and it has always been a challenge to efficiently model graph structures using a unified paradigm for understanding and reasoning over various graphs. Moreover, in the era of large language models, integrating complex graph information into text sequences has become exceptionally difficult, which hinders the ability to interact with graph data through natural language instructions. This paper presents a new paradigm for understanding and reasoning about graph data by integrating image encoding and multimodal technologies. The approach enables the comprehension of graph data through an instruction-response format, utilizing GPT-4V's advanced capabilities. The study evaluates this paradigm on various graph types, highlighting the model's strengths and weaknesses, particularly in Chinese OCR performance and complex reasoning tasks. The findings suggest a new direction for enhancing graph data processing and natural language interaction.


Exploring Large Language Models for Knowledge Graph Completion

Yao, Liang, Peng, Jiazhen, Mao, Chengsheng, Luo, Yuan

arXiv.org Artificial Intelligence

Knowledge graphs play a vital role in numerous artificial intelligence tasks, yet they frequently face the issue of incompleteness. In this study, we explore utilizing Large Language Models (LLM) for knowledge graph completion. We consider triples in knowledge graphs as text sequences and introduce an innovative framework called Knowledge Graph LLM (KG-LLM) to model these triples. Our technique employs entity and relation descriptions of a triple as prompts and utilizes the response for predictions. Experiments on various benchmark knowledge graphs demonstrate that our method attains state-of-the-art performance in tasks such as triple classification and relation prediction. We also find that fine-tuning relatively smaller models (e.g., LLaMA-7B, ChatGLM-6B) outperforms recent ChatGPT and GPT-4.
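The core idea of treating triples as text sequences can be sketched as a prompt builder for triple classification. The exact template below is an assumption for illustration, not the paper's fine-tuning format:

```python
# Serialize a knowledge-graph triple, together with entity descriptions,
# into a text prompt asking whether the triple is plausible. The model's
# yes/no response is then used as the classification.

def triple_prompt(head: str, relation: str, tail: str,
                  descriptions: dict[str, str]) -> str:
    return (
        f"{head}: {descriptions[head]}\n"
        f"{tail}: {descriptions[tail]}\n"
        f"Is the triple ({head}, {relation}, {tail}) true? Answer yes or no."
    )

descriptions = {
    "Marie Curie": "physicist and chemist who pioneered radioactivity research",
    "Nobel Prize in Physics": "annual award for outstanding physics contributions",
}
prompt = triple_prompt("Marie Curie", "was awarded",
                       "Nobel Prize in Physics", descriptions)
print(prompt)
```

Relation prediction works the same way, with the relation slot left for the model to fill rather than verified.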


Improving TTS for Shanghainese: Addressing Tone Sandhi via Word Segmentation

Chen, Yuanhao

arXiv.org Artificial Intelligence

Tone is a crucial component of the prosody of Shanghainese, a Wu Chinese variety spoken primarily in urban Shanghai. Tone sandhi, which applies to all multi-syllabic words in Shanghainese, then, is key to natural-sounding speech. Unfortunately, recent work on Shanghainese TTS (text-to-speech) such as Apple's VoiceOver has shown poor performance with tone sandhi, especially LD (left-dominant sandhi). Here I show that word segmentation during text preprocessing can improve the quality of tone sandhi production in TTS models. Syllables within the same word are annotated with a special symbol, which serves as a proxy for prosodic information of the domain of LD. Contrary to the common practice of using prosodic annotation mainly for static pauses, this paper demonstrates that prosodic annotation can also be applied to dynamic tonal phenomena. I anticipate this project to be a starting point for bringing formal linguistic accounts of Shanghainese into computational projects. For too long we have been using Mandarin models to approximate Shanghainese, but it is a different language with its own linguistic features, and its digitisation and revitalisation should be treated as such.
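The annotation idea can be sketched as a small preprocessing step: after word segmentation, syllables within the same word are joined by a special symbol marking the sandhi domain. The `+` marker and the hand-segmented example below are illustrative; the paper's actual scheme may differ:

```python
# Join syllables inside each multi-syllabic word with a marker so the TTS
# front end can learn the domain of left-dominant (LD) tone sandhi.

def annotate(words: list[list[str]], marker: str = "+") -> str:
    # words: a segmented utterance, each word given as a list of syllables.
    # Syllables within a word share a sandhi domain, so they are linked by
    # the marker; word boundaries remain plain spaces.
    return " ".join(marker.join(syllables) for syllables in words)

# A toy two-word utterance with romanized syllables (hypothetical forms):
utterance = [["zaan", "he"], ["ghe", "gho"]]
print(annotate(utterance))  # zaan+he ghe+gho
```

The marker gives the acoustic model an explicit boundary signal, which is what lets it treat LD sandhi as a within-word phenomenon rather than inferring domains implicitly.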


FoundationDB: A Distributed Key-Value Store

Communications of the ACM

FoundationDB is an open-source transactional key-value store created more than 10 years ago. It is one of the first systems to combine the flexibility and scalability of NoSQL architectures with the power of ACID transactions. FoundationDB adopts an unbundled architecture that decouples an in-memory transaction management system, a distributed storage system, and a built-in distributed configuration system. Each sub-system can be independently provisioned and configured to achieve scalability, high availability, and fault tolerance. FoundationDB includes a deterministic simulation framework, used to test every new feature under a myriad of possible faults. This rigorous testing makes FoundationDB extremely stable and allows developers to introduce and release new features in a rapid cadence. FoundationDB offers a minimal and carefully chosen feature set, which has enabled a range of disparate systems to be built as layers on top. FoundationDB is the underpinning of cloud infrastructure at Apple, Snowflake, and other companies, due to its consistency, robustness, and availability for storing user data, system metadata and configuration, and other critical information. Many cloud services rely on scalable, distributed storage backends for persisting application state. Such storage systems must be fault tolerant and highly available, and at the same time provide sufficiently strong semantics and flexible data models to enable rapid application development. Such services must scale to billions of users, petabytes or exabytes of stored data, and millions of requests per second. More than a decade ago, NoSQL storage systems emerged offering ease of application development, making it simple to scale and operate storage systems, offering fault-tolerance and supporting a wide range of data models (instead of the traditional rigid relational model). 
In order to scale, these systems sacrificed transactional semantics, and instead provided eventual consistency, forcing application developers to reason about interleavings of updates from concurrent operations. FoundationDB (FDB) was created in 2009 and gets its name from the focus on providing what we saw as the foundational set of building blocks required to build higher-level distributed systems.
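The contrast with eventual consistency can be illustrated with a toy version of the optimistic concurrency control that FoundationDB-style stores use: each transaction records what it reads, and at commit time a resolver rejects it if any read key was overwritten after the transaction's read version. This is a single-process sketch of the idea, not FoundationDB's actual implementation:

```python
# Toy optimistic-concurrency key-value store: first committer wins,
# conflicting readers abort instead of silently interleaving.

class Store:
    def __init__(self):
        self.data = {}          # key -> value
        self.versions = {}      # key -> commit version of last write
        self.version = 0

class Txn:
    def __init__(self, store: Store):
        self.store = store
        self.read_version = store.version
        self.reads = set()
        self.writes = {}

    def get(self, key):
        self.reads.add(key)
        return self.writes.get(key, self.store.data.get(key))

    def set(self, key, value):
        self.writes[key] = value

    def commit(self) -> bool:
        # Conflict check: abort if a read key changed since we started.
        for key in self.reads:
            if self.store.versions.get(key, 0) > self.read_version:
                return False
        self.store.version += 1
        for key, value in self.writes.items():
            self.store.data[key] = value
            self.store.versions[key] = self.store.version
        return True

store = Store()
init = Txn(store)
init.set("balance", 100)
assert init.commit()

t1, t2 = Txn(store), Txn(store)
t1.set("balance", t1.get("balance") + 10)
t2.set("balance", t2.get("balance") - 10)
ok1, ok2 = t1.commit(), t2.commit()
print(ok1, ok2)  # True False: t2's read conflicts with t1's committed write
```

Under eventual consistency both writes would land in some order and one update would be silently lost; the conflict check is what restores serializable semantics for the application developer.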


Shining thanks to an influx of unconventional ideas

#artificialintelligence

Identifying the human voice across different patterns, noises, and tones can be a formidable task. Extracting the meaning of a word in the context of its sentence can slow the growth of this market. Resistance to adoption by end users, due to the high installation cost associated with these solutions and services, is hindering the growth of the natural language processing market. Currently, natural language processing and artificial intelligence are seeing the emergence of various service providers and individual solution providers, which pose healthy competition to established service providers. Some major players in this market are Apple Inc., Google Inc., Hewlett-Packard Co, IBM Corp., Microsoft Corp., 3M Co., Dolbey System, Inc., Netbase Solutions Inc., SAS Institute Inc. and others.


Three digital wellness trends to watch in 2022

#artificialintelligence

Fueled by urgent pandemic-related needs, the future of health and wellness is being shaped by factors ranging from emerging technologies and new apps to accelerated changes in behavior. Within weeks, patients and doctors adopted innovations such as telehealth, wearable tech, and mindfulness apps. Along the way, they have come to expect as commonplace more flexible, easy to use, and personalized digital health experiences. Our view is that rising demand for improved access to care, control, results, and hyper-efficacy will dominate the health care landscape in 2022. Amid this backdrop, we see three digital health care trends that we believe are worth keeping a close eye on this year.


Stock Price Prediction of Apple Inc Using Recurrent Neural Network

#artificialintelligence

Stock price prediction is definitely not an easy task, as there are many factors that need to be taken into consideration. Overall market conditions, competitors' performance, new product releases, and the temper of global relations are just some of the key factors that have the potential to increase or decrease stock prices. In addition to these, unexpected events may occur, such as the coronavirus situation we are currently experiencing. The factors listed so far are hard to predict, so we will put them aside throughout this post.
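As a minimal illustration of the recurrent mechanism such a model builds on, here is an untrained vanilla-RNN forward pass over a window of closing prices. The weights are fixed illustrative numbers; the post presumably trains a real recurrent network (e.g. an LSTM) on historical Apple price data:

```python
# Untrained vanilla RNN: normalize a price window, unroll a tiny
# recurrent cell over it, and map the output back to the price scale.

import math

def rnn_predict(prices: list[float]) -> float:
    # Normalize the window to the [0, 1] range.
    lo, hi = min(prices), max(prices)
    xs = [(p - lo) / (hi - lo) for p in prices]

    # Tiny RNN: 1 input unit, 2 hidden units, 1 output unit.
    w_xh = [0.8, -0.5]                     # input  -> hidden
    w_hh = [[0.4, 0.1], [-0.2, 0.3]]       # hidden -> hidden
    w_hy = [0.9, 0.7]                      # hidden -> output
    h = [0.0, 0.0]

    for x in xs:  # unroll over the time steps
        h = [math.tanh(x * w_xh[i] + sum(w_hh[i][j] * h[j] for j in range(2)))
             for i in range(2)]

    y = sum(w_hy[i] * h[i] for i in range(2))   # normalized prediction
    return lo + y * (hi - lo)                   # map back to price scale

window = [310.0, 312.5, 308.0, 315.0, 318.2]
print(round(rnn_predict(window), 2))
```

The hidden state carries information forward across time steps, which is what distinguishes a recurrent predictor from fitting each day's price independently.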